End-to-end (E2E) models have become the default choice for state-of-the-art speech recognition systems. Such models are trained on large amounts of labelled data, which are often not available for low-resource languages. Techniques such as self-supervised learning and transfer learning hold promise but have not yet been effective in training accurate models. On the other hand, collecting labelled datasets over a diverse set of domains and speakers is very expensive. In this work, we demonstrate an inexpensive and effective alternative to these approaches by "mining" text and audio pairs for Indian languages from public sources, specifically from the public archives of All India Radio. As a key component, we adapt the Needleman-Wunsch algorithm to align sentences with corresponding audio segments given a long audio file and a PDF of its transcript, while being robust to errors due to OCR, extraneous text, and untranscribed speech. We thus create Shrutilipi, a dataset which contains over 6,400 hours of labelled audio across 12 Indian languages, totalling 4.95M sentences. On average, Shrutilipi results in a 2.3x increase over publicly available labelled data. We establish the quality of Shrutilipi with 21 human evaluators across the 12 languages. We also establish the diversity of Shrutilipi in terms of represented regions, speakers, and mentioned entities. Notably, we show that adding Shrutilipi to the training set of Wav2Vec models leads to an average decrease in WER of 5.8% for 7 languages on the IndicSUPERB benchmark. For Hindi, which has the most benchmarks (7), the average WER falls from 18.8% to 13.5%. This improvement extends to efficient models: for a Conformer model (10x smaller than Wav2Vec), we show a 2.3% drop in WER. Finally, we demonstrate the diversity of Shrutilipi by showing that models trained on it are more robust to noisy input.
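As a rough illustration of the alignment idea, below is a minimal sketch of Needleman-Wunsch global alignment between transcript sentences and rough hypotheses for audio segments. The similarity function, gap penalty, and tokenisation are illustrative assumptions, not the paper's exact implementation:

```python
# A minimal sketch of Needleman-Wunsch alignment: gaps on either side let the
# alignment skip OCR noise / extraneous text (transcript side) or
# untranscribed speech (audio side). Scoring here is an assumption.

def needleman_wunsch(src, tgt, match_score, gap=-1.0):
    n, m = len(src), len(tgt)
    # dp[i][j] = best alignment score of src[:i] against tgt[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + match_score(src[i - 1], tgt[j - 1]),  # align pair
                dp[i - 1][j] + gap,  # skip a transcript sentence
                dp[i][j - 1] + gap,  # skip an audio segment
            )
    # Trace back to recover the aligned (sentence, segment) index pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if dp[i][j] == dp[i - 1][j - 1] + match_score(src[i - 1], tgt[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

def token_overlap(a, b):
    """Illustrative similarity: Jaccard overlap of whitespace tokens."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

# pairs = needleman_wunsch(transcript_sentences, segment_hypotheses, token_overlap)
```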
A cornerstone of AI research has been the creation and adoption of standardized training and test datasets to earmark the progress of state-of-the-art models. A particularly successful example is the GLUE dataset for training and evaluating English natural language understanding (NLU) models. A large body of research on BERT-based language models has revolved around performance improvements on the NLU tasks in GLUE. To evaluate language models in other languages, several language-specific GLUE datasets were created. The area of spoken language understanding (SLU) has followed a similar trajectory. The success of large self-supervised models such as wav2vec2 enables the creation of speech models from relatively easy-to-access unlabelled data. These models can then be evaluated on SLU tasks, such as on the SUPERB benchmark. In this work, we extend this to Indic languages by releasing the IndicSUPERB benchmark. Specifically, we make the following three contributions. (i) We collect Kathbath, containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts in India. (ii) Using Kathbath, we create benchmarks across 6 speech tasks: automatic speech recognition, speaker verification, speaker identification (mono/multi), language identification, query by example, and keyword spotting for the 12 languages. (iii) On the released benchmarks, we train and evaluate different self-supervised models alongside a commonly used baseline, FBANK. We show that language-specific fine-tuned models are more accurate than the baseline on most tasks, including a gap of 76% on the language identification task. However, for speaker identification, self-supervised models trained on large datasets demonstrate an advantage. We hope IndicSUPERB contributes to progress in developing spoken language understanding models for Indian languages.
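For concreteness, here is a minimal sketch of how one might run ASR inference with a fine-tuned wav2vec2-style model on such a benchmark, using the HuggingFace transformers API; the checkpoint name is a hypothetical placeholder, not a released artefact:

```python
# A minimal sketch, assuming the transformers library and 16 kHz mono audio.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "some-org/wav2vec2-hindi-asr"  # hypothetical checkpoint name
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

def transcribe(waveform, sampling_rate=16_000):
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)   # greedy CTC decoding
    return processor.batch_decode(ids)[0]
```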
Recent methods in speech and language technology pretrain very large models that are then fine-tuned for specific tasks. However, the benefits of such large models are often limited to a handful of resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains, including education, news, technology, and finance. Second, using this raw speech data, we pretrain several variants of wav2vec-style models for the 40 Indian languages. Third, we analyze the pretrained models to find key properties: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often attend within small local windows. Fourth, we fine-tune downstream ASR models for 9 languages and obtain state-of-the-art results on 3 public datasets, including for very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
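The layer-wise analysis mentioned above is a standard probing setup. Below is a minimal sketch, assuming mean-pooled hidden states and language-family labels are precomputed; the names are illustrative:

```python
# A minimal sketch of layer probing: train a linear classifier on each layer's
# features and check how well it separates language families.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_layer(features, labels):
    """features: (num_utterances, dim) mean-pooled states from one layer."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)  # higher => layer is more discriminative

# accs = [probe_layer(layer_feats[l], family_labels) for l in range(num_layers)]
```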
Detecting personal health mentions on social media is essential to complement existing health surveillance systems. However, annotating data for detecting health mentions at a large scale is a challenging task. This research employs a multitask learning framework to leverage available annotated data from a related task to improve performance on the main task of detecting personal health experiences mentioned in social media texts. Specifically, we focus on incorporating emotional information into our target task by using emotion detection as an auxiliary task. Our approach significantly improves a wide range of personal health mention detection tasks compared to a strong state-of-the-art baseline.
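A typical way to realize this kind of multitask setup is a shared encoder with one head per task. The sketch below is a minimal illustration under that assumption; the encoder choice and loss weighting are not specified by the abstract:

```python
# A minimal sketch of hard-parameter-sharing multitask learning: the auxiliary
# emotion loss regularises the shared encoder used by the health-mention head.
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    def __init__(self, encoder, hidden_dim, num_health_labels, num_emotions):
        super().__init__()
        self.encoder = encoder  # e.g. a BERT-style encoder with pooler_output
        self.health_head = nn.Linear(hidden_dim, num_health_labels)
        self.emotion_head = nn.Linear(hidden_dim, num_emotions)

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).pooler_output
        return self.health_head(pooled), self.emotion_head(pooled)

def multitask_loss(health_logits, emotion_logits, health_y, emotion_y, alpha=0.5):
    ce = nn.CrossEntropyLoss()
    # alpha trades off the main task against the auxiliary emotion task.
    return ce(health_logits, health_y) + alpha * ce(emotion_logits, emotion_y)
```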
The health mention classification (HMC) task is the process of identifying and classifying mentions of health-related concepts in text. This can be useful for identifying and tracking the spread of diseases through social media posts. However, this is a non-trivial task. Here we build on recent studies suggesting that using emotional information may improve upon this task. Our study results in a framework for health mention classification that incorporates affective features. We present two methods, an intermediate task fine-tuning approach (implicit) and a multi-feature fusion approach (explicit) to incorporate emotions into our target task of HMC. We evaluated our approach on 5 HMC-related datasets from different social media platforms including three from Twitter, one from Reddit and another from a combination of social media sources. Extensive experiments demonstrate that our approach results in statistically significant performance gains on HMC tasks. By using the multi-feature fusion approach, we achieve at least a 3% improvement in F1 score over BERT baselines across all datasets. We also show that considering only negative emotions does not significantly affect performance on the HMC task. Additionally, our results indicate that HMC models infused with emotional knowledge are an effective alternative, especially when other HMC datasets are unavailable for domain-specific fine-tuning. The source code for our models is freely available at https://github.com/tahirlanre/Emotion_PHM.
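The explicit multi-feature fusion approach can be pictured as concatenating the encoder's sentence representation with a vector of affective features before classification. Below is a minimal sketch under that assumption; the affective-feature extractor and dimensions are illustrative:

```python
# A minimal sketch of explicit feature fusion for HMC: sentence vector plus
# affective features, fed to a small classifier head.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, encoder, hidden_dim, affect_dim, num_labels):
        super().__init__()
        self.encoder = encoder  # e.g. a BERT-style encoder with pooler_output
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + affect_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, affect_feats, **text_inputs):
        # Fuse the [CLS]-style sentence vector with precomputed affect features.
        sent = self.encoder(**text_inputs).pooler_output
        return self.classifier(torch.cat([sent, affect_feats], dim=-1))
```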
Spatial perception is a key task in several robotics applications. In general, it involves the nonlinear estimation of hidden variables that represent the state of the robot/environment. However, in the presence of outliers the standard nonlinear least squares formulation results in poor estimates. Several methods have been considered in the literature to improve the reliability of the estimation process. Most methods are based on heuristics, since guaranteed global robust estimation is not generally practical due to high computational costs. Recently, general-purpose robust estimation heuristics have been proposed that leverage existing non-minimal solvers available for the outlier-free formulations without the need for an initial guess. In this work, we propose two similar heuristics backed by Bayesian theory. We evaluate these heuristics in practical scenarios to demonstrate their merits in different applications, including 3D point cloud registration, mesh registration, and pose graph optimization.
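The general pattern such heuristics follow is an outer loop that re-weights residuals around an outlier-free non-minimal solver, removing the need for an initial guess. The sketch below illustrates that pattern only; the Geman-McClure-style weight function and solver interface are assumptions, not the paper's specific Bayesian formulation:

```python
# A minimal sketch of iteratively re-weighted robust estimation wrapping a
# non-minimal solver. Weight function and interfaces are illustrative.
import numpy as np

def robust_estimate(solve_weighted, residuals, n_residuals, num_iters=20, mu=1.0):
    """solve_weighted(weights) -> estimate; residuals(estimate) -> (N,) array."""
    weights = np.ones(n_residuals)
    x = None
    for _ in range(num_iters):
        x = solve_weighted(weights)       # global solve of the weighted problem
        r2 = residuals(x) ** 2
        weights = (mu / (mu + r2)) ** 2   # down-weight large residuals (outliers)
    return x
```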
Machine learning algorithms typically assume that the training and test samples come from the same distribution, i.e., are in-distribution. However, in open-world scenarios, streaming big data can be Out-Of-Distribution (OOD), rendering these algorithms ineffective. Prior solutions to the OOD challenge seek to identify invariant features across different training domains. The underlying assumption is that these invariant features should also work reasonably well in the unlabeled target domain. By contrast, this work is interested in the domain-specific features, which include both invariant features and features unique to the target domain. We propose a simple yet effective approach that relies on correlations in general, regardless of whether the features are invariant or not. Our approach uses the most confidently predicted samples identified by an OOD base model (teacher model) to train a new model (student model) that effectively adapts to the target domain. Empirical evaluations on benchmark datasets show that performance is improved over the SOTA by ~10-20%.
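The core selection step can be sketched as follows: keep the target-domain samples the teacher predicts most confidently and use its predictions as pseudo-labels for the student. The confidence threshold and training details below are illustrative assumptions:

```python
# A minimal sketch of confidence-based pseudo-label selection for
# teacher-student domain adaptation.
import torch

def select_confident(teacher, loader, threshold=0.9):
    """Return (inputs, pseudo_labels) the teacher is confident about."""
    teacher.eval()
    keep_x, keep_y = [], []
    with torch.no_grad():
        for x, _ in loader:  # unlabeled target-domain data; labels unused
            probs = torch.softmax(teacher(x), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = conf >= threshold
            keep_x.append(x[mask])
            keep_y.append(pseudo[mask])
    return torch.cat(keep_x), torch.cat(keep_y)

# x_conf, y_conf = select_confident(teacher_model, target_loader)
# ...then train the student on (x_conf, y_conf) with a standard cross-entropy loss.
```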
Low-rank and sparse decomposition based methods find their use in many applications involving background modeling such as clutter suppression and object tracking. While Robust Principal Component Analysis (RPCA) has achieved great success in performing this task, it can take hundreds of iterations to converge, and its performance decreases in the presence of phenomena such as occlusion, jitter, and fast motion. The recently proposed deep unfolded networks, on the other hand, have demonstrated better accuracy and improved convergence over both their iterative equivalents and other neural network architectures. In this work, we propose a novel deep unfolded spatiotemporal RPCA (DUST-RPCA) network, which explicitly takes advantage of the spatial and temporal continuity in the low-rank component. Our experimental results on the moving MNIST dataset indicate that DUST-RPCA gives better accuracy when compared with existing state-of-the-art deep unfolded RPCA networks.
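To make the unfolding idea concrete, below is a minimal sketch of a generic unfolded RPCA layer: one singular-value thresholding step for the low-rank component and one soft-thresholding step for the sparse component, with learnable thresholds. This is the generic pattern, not DUST-RPCA's exact layer, which additionally models spatiotemporal continuity:

```python
# A minimal sketch of one unfolded RPCA layer with learned thresholds.
import torch
import torch.nn as nn

def soft_threshold(x, tau):
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

class UnfoldedRPCALayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.tau_l = nn.Parameter(torch.tensor(1.0))  # learned SVT threshold
        self.tau_s = nn.Parameter(torch.tensor(0.1))  # learned sparsity threshold

    def forward(self, M, L, S):
        # Low-rank update: singular-value thresholding of the residual M - S.
        U, sig, Vh = torch.linalg.svd(M - S, full_matrices=False)
        L = U @ torch.diag(soft_threshold(sig, self.tau_l)) @ Vh
        # Sparse update: entrywise shrinkage of the remaining residual M - L.
        S = soft_threshold(M - L, self.tau_s)
        return L, S
```

Stacking a few such layers and training the thresholds end-to-end is what gives unfolded networks their faster convergence relative to running the iterative algorithm to completion.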
Early evaluation of patients who require special care and who have high death-expectancy in COVID-19, and the effective determination of relevant biomarkers on large sample groups, are important to reduce mortality. This study aimed to reveal the routine blood-value predictors of COVID-19 mortality and to determine the lethal-risk levels of these predictors during the disease process. The dataset of the study consists of 38 routine blood values of 2597 patients who died (n = 233) and those who recovered (n = 2364) from COVID-19 in August-December 2021. In this study, the histogram-based gradient-boosting (HGB) model was the most successful machine-learning classifier in detecting living and deceased COVID-19 patients (squared F1 metric F1^2 = 1). The most efficient binary combinations with procalcitonin were obtained with D-dimer, ESR, D-Bil, and ferritin. The HGB model operated with these feature pairs correctly detected almost all of the patients who survived and those who died (precision > 0.98, recall > 0.98, F1^2 > 0.98). Furthermore, when the HGB model was operated with a single feature, the most efficient features were procalcitonin (F1^2 = 0.96) and ferritin (F1^2 = 0.91). In addition, according to the two-threshold approach, ferritin values between 376.2 μg/L and 396.0 μg/L (F1^2 = 0.91) and procalcitonin values between 0.2 μg/L and 5.2 μg/L (F1^2 = 0.95) were found to be lethal risk levels for COVID-19. Considering all the results, we suggest that many feature combinations, especially those involving procalcitonin and ferritin, operated with the HGB model can be used to achieve very successful results in classifying those who live and those who die from COVID-19. Moreover, we strongly recommend that clinicians consider the critical levels we have found for procalcitonin and ferritin to reduce the lethality of the COVID-19 disease.
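For reference, this kind of two-feature HGB classification is straightforward to reproduce with scikit-learn; the sketch below assumes a dataframe df with the named columns, which are illustrative placeholders:

```python
# A minimal sketch of histogram-based gradient boosting on one binary feature
# combination (procalcitonin + ferritin). HistGradientBoostingClassifier also
# handles missing blood values (NaN) natively.
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

features = ["procalcitonin", "ferritin"]           # one binary combination
X, y = df[features].values, df["deceased"].values  # df assumed loaded elsewhere

clf = HistGradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"mean F1 over folds: {scores.mean():.3f}")
```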
Restoration of poor-quality images with a blended set of artifacts plays a vital role in reliable diagnosis. Existing studies have focused on specific restoration problems, such as image deblurring, denoising, and exposure correction, where there is usually a strong assumption on the artifact type and severity. As a pioneering study in blind X-ray restoration, we propose a joint model for generic image restoration and classification: the Restore-to-Classify Generative Adversarial Network (R2C-GAN). Such a jointly optimized model keeps any disease intact after restoration, which naturally leads to higher diagnosis performance thanks to the improved X-ray image quality. To accomplish this crucial objective, we define the restoration task as an image-to-image translation problem, from a domain of noisy, blurry, or over/under-exposed images to a high-quality image domain. The proposed R2C-GAN model is able to learn forward and inverse transforms between the two domains using unpaired training samples. Simultaneously, the joint classification preserves the disease label during restoration. Moreover, R2C-GAN is equipped with operational layers/neurons that reduce the network depth while further boosting both restoration and classification performance. The proposed joint model is extensively evaluated on the QaTa-COV19 dataset for Coronavirus Disease 2019 (COVID-19) classification. The proposed restoration approach achieves an F1 score of over 90%, which is significantly higher than the performance of any deep model. Moreover, in a qualitative analysis, the restoration performance of R2C-GAN was approved by a group of medical doctors. We share the software implementation at https://github.com/meteahishali/r2c-gan.
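The joint objective combining unpaired translation with label preservation can be sketched as follows. The network definitions, loss weights, and loss choices below are illustrative assumptions, not the operational-layer R2C-GAN architecture itself:

```python
# A minimal sketch of a joint restoration + classification objective:
# adversarial term (restored images look high-quality), cycle term (inverse
# mapping recovers the input, enabling unpaired training), classification term
# (the disease label survives restoration).
import torch
import torch.nn as nn

def joint_loss(G, F, C, D_hq, x_poor, y_label, lam_cyc=10.0, lam_cls=1.0):
    """G: poor->high-quality, F: its inverse, C: classifier, D_hq: discriminator."""
    bce, ce, l1 = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.L1Loss()
    x_restored = G(x_poor)
    d_out = D_hq(x_restored)
    adv = bce(d_out, torch.ones_like(d_out))   # fool the HQ-domain discriminator
    cyc = l1(F(x_restored), x_poor)            # cycle consistency
    cls = ce(C(x_restored), y_label)           # keep the disease label intact
    return adv + lam_cyc * cyc + lam_cls * cls
```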